Collaborative recommendation algorithm based on deep graph neural network
Runchao PAN, Qishan YU, Hongfei XIONG, Zhihui LIU
Journal of Computer Applications    2023, 43 (9): 2741-2746.   DOI: 10.11772/j.issn.1001-9081.2022091361

For the problem of over-smoothing in existing recommendation algorithms based on Graph Neural Network (GNN), a collaborative filtering recommendation algorithm based on deep GNN, namely Deep NGCF (Deep Neural Graph Collaborative Filtering), was proposed. In this algorithm, the initial residual connection and identity mapping were introduced into the GNN, which prevented it from falling into over-smoothing after multiple graph convolution operations. Firstly, the initial embeddings of users and items were obtained from their interaction history. Next, in the aggregation and propagation layer, collaborative signals of users and items at different stages were obtained with the help of the initial residual connection and identity mapping. Finally, score prediction was performed according to the linear representation of all collaborative signals. In addition, to further improve the flexibility and recommendation performance of the model, adjustable weights were set in the initial residual connection and identity mapping. To verify the feasibility and effectiveness of the Deep NGCF algorithm, experiments were conducted on the Gowalla, Yelp-2018 and Amazon-book datasets. The results show that, compared with existing GNN-based recommendation algorithms such as Graph Convolutional Matrix Completion (GCMC) and Neural Graph Collaborative Filtering (NGCF), the Deep NGCF algorithm achieves the best results in recall and Normalized Discounted Cumulative Gain (NDCG), thereby verifying its effectiveness.
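The initial residual connection described above can be sketched as follows. This is an illustrative pure-Python sketch, not the paper's implementation: the function name, the mean aggregation, and the parameter `alpha` are assumptions, and the learnable weight of the identity mapping is omitted (taken as the identity).

```python
def propagate(embeddings, initial, neighbors, alpha=0.1):
    """One propagation step with an initial residual connection:
    aggregate neighbor embeddings, then blend in the layer-0 embedding
    so deep stacks cannot collapse to an over-smoothed fixed point."""
    out = []
    for i, e0 in enumerate(initial):
        dim = len(e0)
        # mean-aggregate current neighbor embeddings
        # (symmetric normalization omitted for brevity)
        agg = [0.0] * dim
        for j in neighbors[i]:
            for d in range(dim):
                agg[d] += embeddings[j][d] / len(neighbors[i])
        # initial residual: mix the aggregated signal with the
        # initial (layer-0) embedding
        out.append([(1 - alpha) * a + alpha * e for a, e in zip(agg, e0)])
    return out
```

Stacking many such layers keeps each node's representation anchored to its initial embedding, which is the mechanism that counteracts over-smoothing.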

Lightweight coverage hole detection algorithm based on relative position of link intersections
HAN Yulao, FANG Dingyi
Journal of Computer Applications    2020, 40 (9): 2698-2705.   DOI: 10.11772/j.issn.1001-9081.2019122115
Coverage holes in Wireless Sensor Network (WSN) cause poor network performance and low network service quality. To solve these problems, a Coverage Hole Detection Algorithm based on Relative Position of Intersections (CHDARPI) was proposed. First, hole boundary nodes were defined and the Relative Position of Intersections (RPI) of the link between adjacent boundary nodes was calculated. Then, the starting node of hole detection was selected based on a policy of Number of Incomplete Coverage Intersections (NICI) priority, which guaranteed the concurrent detection of connected coverage holes. Finally, in the process of coverage hole detection, hole detection messages were confined to the hole boundary nodes, and forwarding strategies for different scenarios were formulated according to the direction angles of the forwarding nodes, which ensured the efficiency of coverage hole detection. The simulation results show that, compared with the existing Distributed Coverage Hole Detection (DCHD) algorithm and Distributed Least Polar Angle (DLPA) algorithm, the proposed CHDARPI decreases the average detection time and detection energy consumption by at least 15.2% and 16.7% respectively.
Multi-objective path planning algorithm for mobile charging devices combining wireless charging and data collection
HAN Yulao, FANG Dingyi
Journal of Computer Applications    2020, 40 (6): 1745-1750.   DOI: 10.11772/j.issn.1001-9081.2019111933
The limited resources of wireless sensor network nodes lead to poor completeness and timeliness of data collection. To solve these problems, a multi-objective path planning model for Mobile Charging Devices (MCD) combining mobile charging and data collection was established, and a Path Planning algorithm based on Greedy Strategy for MCD combining wireless charging and data collection (PPGS) was proposed. Firstly, the monitoring area was divided into seamless regular hexagonal cells, which effectively reduced the number of cells visited by MCD. Then, parameters such as node energy and the quantity of collected data were predicted by using a Markov model, and on that basis the minimum anchor stopping time and maximum anchor waiting time for MCD were predicted. Compared with the existing Delay-Constrained Mobile Energy Charging (DCMEC) algorithm and Mobile Device Scheduling Algorithm and Grid-Based Algorithm (GBA+MDSA), the proposed algorithm has lower complexity and does not need to know the actual locations of nodes and anchors in advance. The simulation results show that PPGS can guarantee the completeness and timeliness of data collection with a small number of MCD in a wireless sensor network.
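A greedy traversal of the hexagonal cell centers can be illustrated as below. This is a minimal sketch under the assumption of Euclidean cell-center coordinates; `greedy_path` is a hypothetical helper, and the paper's actual strategy also weighs predicted node energy and data quantities, which is not shown here.

```python
import math

def greedy_path(start, cells):
    """Visit cell centers in nearest-first greedy order: from the
    current position, always move to the closest unvisited cell."""
    path, remaining, cur = [], list(cells), start
    while remaining:
        nxt = min(remaining, key=lambda c: math.dist(cur, c))
        remaining.remove(nxt)
        path.append(nxt)
        cur = nxt
    return path
```

For example, `greedy_path((0, 0), [(2, 0), (1, 0), (3, 0)])` visits the cells in order of increasing distance from the start.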
Self-driving tour route mining based on sparse trajectory clustering
YANG Fengyi, MA Yupeng, BAO Hengbin, HAN Yunfei, MA Bo
Journal of Computer Applications    2020, 40 (4): 1079-1084.   DOI: 10.11772/j.issn.1001-9081.2019081467
Aiming at the difficulty of constructing real tour routes from the sparse refueling trajectories of self-driving tourists, a sparse trajectory clustering algorithm based on semantic representation was proposed to mine popular self-driving tour routes. Different from traditional trajectory clustering algorithms based on trajectory point matching, in this algorithm the semantic relationships between different trajectory points were considered and a low-dimensional vector representation of each trajectory was learned. Firstly, a neural network language model was used to learn distributed vector representations of the gas stations. Then, the average of all station vectors in each trajectory was taken as the vector representation of that trajectory. Finally, the classical k-means algorithm was used to cluster the trajectory vectors. The final visualization results show that the proposed algorithm effectively mines two popular self-driving tour routes.
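The averaging step that turns a variable-length trajectory into a fixed-length vector can be sketched as follows; `station_vecs` stands in for the embeddings learned by the language model, and all names are illustrative.

```python
def trajectory_vector(station_ids, station_vecs):
    """Represent a refueling trajectory by the mean of its
    station embedding vectors."""
    dim = len(next(iter(station_vecs.values())))
    total = [0.0] * dim
    for s in station_ids:
        for d in range(dim):
            total[d] += station_vecs[s][d]
    return [t / len(station_ids) for t in total]
```

The resulting fixed-length vectors can then be fed directly to k-means.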
Downlink resource scheduling based on weighted average delay in long term evolution system
WANG Yan, MA Xiurong, SHAN Yunlong
Journal of Computer Applications    2019, 39 (5): 1429-1433.   DOI: 10.11772/j.issn.1001-9081.2018081734
Aiming at the transmission performance requirements of Real-Time (RT) and Non-Real-Time (NRT) services for multiple users in the downlink of the Long Term Evolution (LTE) mobile communication system, an improved Modified Largest Weighted Delay First (MLWDF) scheduling algorithm based on weighted average delay was proposed. On the basis of both channel awareness and Quality of Service (QoS) awareness, a weighted average delay factor reflecting the state of the user buffer was used, obtained by balancing the average delay of the data to be transmitted against that of the already transmitted data in the user buffer. RT services with large delay and heavy traffic were prioritized, which improved the user experience. Theoretical analysis and link simulation show that the proposed algorithm improves the QoS performance of RT services while ensuring the delay and fairness of each service. Compared with the MLWDF algorithm, when the number of users reached 50, the packet loss rate of RT services of the proposed algorithm decreased by 53.2% and the average throughput of RT traffic increased by 44.7%. Although the throughput of NRT services is sacrificed, it is still better than that of the VT-MLWDF (Virtual Token MLWDF) algorithm. The theoretical analysis and simulation results show that the transmission performance and QoS of the proposed algorithm are superior to those of the comparison algorithms.
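The scheduling decision can be sketched with the classic M-LWDF metric, a_i * W_i * r_i / R_i (delay weight, head-of-line delay, instantaneous rate, average rate). This illustrative sketch folds the paper's buffer-based weighted average delay factor into the weight `a`; the field names are assumptions.

```python
def pick_user(users):
    """Schedule the user maximizing the M-LWDF-style metric
    a * hol_delay * rate / avg_rate (larger delay and better
    channel conditions both raise a user's priority)."""
    return max(users, key=lambda u: u["a"] * u["hol_delay"] * u["rate"] / u["avg_rate"])
```

A delayed RT user thus outranks an NRT user with a better channel once its head-of-line delay grows large enough.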
Software defined network path security based on Hash chain
LI Zhaobin, LIU Zeyi, WEI Zhanzhen, HAN Yu
Journal of Computer Applications    2019, 39 (5): 1368-1373.   DOI: 10.11772/j.issn.1001-9081.2018091857
For the security problem that a Software Defined Network (SDN) controller cannot guarantee that the network policies it issues are correctly executed on the forwarding devices, a new forwarding path monitoring security solution was proposed. Firstly, based on the controller's global view, a path credential interaction processing mechanism based on OpenFlow was designed. Secondly, hash chains and message authentication codes were introduced as the key technologies for generating and processing forwarding path credential information. Thirdly, on this basis, the Ryu controller and the Open vSwitch open-source switch were deeply optimized, with credential processing flows added, to construct a lightweight path security mechanism. The test results show that the proposed mechanism can effectively guarantee the security of the data forwarding path, and its throughput consumption is reduced by more than 20% compared with SDNsec, which makes it more suitable for network environments with complex routes; however, its latency and CPU usage fluctuations exceed 15%, which needs further optimization.
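The hash-chain idea behind the path credentials can be illustrated with `hashlib`. This is a minimal sketch: the actual mechanism binds credentials to flows and combines them with message authentication codes, which is omitted here.

```python
import hashlib

def hash_chain(seed: bytes, length: int):
    """Build a hash chain h1 = H(seed), h2 = H(h1), ...; each hop on
    the forwarding path can consume one link as a path credential."""
    chain, h = [], seed
    for _ in range(length):
        h = hashlib.sha256(h).digest()
        chain.append(h)
    return chain

def verify_link(prev_link: bytes, link: bytes) -> bool:
    """Verification: each credential must hash to the next one, so a
    switch cannot forge a later link without knowing the earlier ones."""
    return hashlib.sha256(prev_link).digest() == link
```

Because SHA-256 is one-way, possessing a later link does not reveal the earlier ones, which is what makes per-hop credential checking cheap for the controller.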
Multiple extended target tracking algorithm for nonlinear systems
HAN Yulan, HAN Chongzhao
Journal of Computer Applications    2019, 39 (5): 1318-1324.   DOI: 10.11772/j.issn.1001-9081.2018092020
Most current extended target tracking algorithms assume a linear Gaussian system. To track multiple extended targets in a nonlinear Gaussian system, a multiple extended target tracking algorithm using a particle filter to jointly estimate the target states and the association hypothesis was proposed. Firstly, the idea of jointly estimating the multiple extended target states and the association hypothesis was proposed, which avoided the mutual constraints between target state estimation and data association. Then, based on the extended target state evolution model and measurement model, a joint proposal distribution for the multiple extended targets and the association hypothesis was established, and the Bayesian framework for the joint estimation was implemented by particle filtering. Finally, to avoid the curse of dimensionality in the particle filter implementation, the generation and evolution of the joint multi-target state particles were decomposed into those of the individual target state particles, and the particle set of each target was resampled according to the weights associated with it, so that each target retained particles with better state estimates while suppressing poor ones. Simulation results show that, in comparison with the Gaussian-mixture and sequential Monte Carlo implementations of the extended target probability hypothesis density filter, the estimation accuracy of the target state is improved, and the Jaccard distance of the shape estimate is reduced by approximately 30% and 20% respectively. The proposed algorithm is more suitable for multiple extended target tracking in nonlinear systems.
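The per-target resampling step can be sketched as a weight-proportional draw; this is illustrative only, with `random.choices` performing standard multinomial resampling and the seed fixed for reproducibility.

```python
import random

def resample(particles, weights, seed=0):
    """Multinomial resampling: draw len(particles) particles with
    probability proportional to their weights, so hypotheses with good
    state estimates survive and poorly weighted ones are suppressed."""
    rng = random.Random(seed)
    return rng.choices(particles, weights=weights, k=len(particles))
```

Resampling each target's particle set separately, rather than the joint multi-target state, is what keeps the particle count from exploding with the number of targets.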
Nonlocal self-similarity based low-rank sparse image denoising
ZHANG Wenwen, HAN Yusheng
Journal of Computer Applications    2018, 38 (9): 2696-2700.   DOI: 10.11772/j.issn.1001-9081.2018020310
Focusing on the issue that many image denoising methods tend to lose detailed information when removing noise, a nonlocal self-similarity based low-rank sparse image denoising method was proposed. Firstly, external natural clean image patches were grouped by block matching based on Mahalanobis Distance (MD), and then a patch-group-based Gaussian Mixture Model (GMM) was learned as the nonlocal self-similarity prior. Secondly, based on the Stable Principal Component Pursuit (SPCP) method, the noisy image matrix was decomposed into low-rank, sparse and noise parts, with the sparse matrix containing the useful information. Finally, the global objective function was minimized to achieve denoising. The experimental results show that, compared with previous denoising methods such as EPLL (Expected Patch Log Likelihood), NCSR (Non-locally Centralized Sparse Representation) and PCLR (external Patch prior guided internal CLusteRing), the proposed method achieves better Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM), faster speed, and better denoising effect and detail retention.
Novel virtual boundary detection method based on deep learning
LAI Chuanbin, HAN Yuexing, GU Hui, WANG Bing
Journal of Computer Applications    2018, 38 (11): 3211-3215.   DOI: 10.11772/j.issn.1001-9081.2018041347
Traditional edge detection methods cannot accurately detect the Virtual Boundary (VB) between different regions in material microscopic images. To solve this problem, a virtual boundary detection model based on Convolutional Neural Network (CNN), called Virtual Boundary Net (VBN), was proposed. The VGGNet (Visual Geometry Group Net) deep learning model was simplified, and the dropout and Adam algorithms were applied in training. An image patch centered on each pixel was extracted as the input, and the class of the patch was output to decide whether the center pixel belongs to the virtual boundary. In experiments on virtual boundary detection for two kinds of material images, the average detection precision of this method reached 92.5%, and the average recall reached 89.5%. The experimental results prove that VBN can detect virtual boundaries accurately and effectively, offering an alternative to inefficient manual analysis.
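The per-pixel patch input can be sketched as below; `extract_patch` is a hypothetical helper that assumes the patch fits inside the image (the paper's border handling is not described in the abstract).

```python
def extract_patch(img, cx, cy, size):
    """Cut a size x size patch centered on pixel (cx, cy) from a 2-D
    list of gray values; the classifier labels the patch to decide
    whether the center pixel lies on the virtual boundary."""
    half = size // 2
    return [row[cx - half:cx + half + 1] for row in img[cy - half:cy + half + 1]]
```

Sliding this window over every pixel turns boundary detection into a sequence of small classification problems.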
Estimation method for RFID tags based on rough and fine double estimation
DING Jianli, HAN Yuchao, WANG Jialiang
Journal of Computer Applications    2017, 37 (9): 2722-2727.   DOI: 10.11772/j.issn.1001-9081.2017.09.2722
To resolve the contradiction between the estimation accuracy and the computational cost of RFID tag estimation methods, as well as the performance instability caused by the randomness of the tag reading process in aviation logistics networking information gathering, a method for estimating the number of RFID tags based on combined rough and fine estimation was proposed, following the idea of complementary advantages. By modeling and analyzing the tag reading process of the framed ALOHA algorithm, a mathematical model relating the average number of tags in a collision slot to the proportion of collision slots was established. A rough estimate of the tag number was made based on this model; then its reliability was evaluated, and a Maximum A Posteriori (MAP) estimation algorithm using the rough estimate as prior knowledge was applied to improve the estimation accuracy. Compared with the original MAP estimation algorithm, the search range can be reduced by up to 90%. The simulation results show that the average error of the proposed rough-fine tag number estimation is 3.8%, the stability of the estimation is significantly improved, and the computational complexity is greatly reduced. The proposed algorithm can be effectively applied to the information collection process of aviation logistics networking.
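The framed-ALOHA slot statistics underlying the rough estimate can be simulated as follows. This is an illustrative sketch: a simple lower-bound estimator (each collision slot holds at least two tags) stands in for the paper's model-based rough estimate, and the seed is fixed for reproducibility.

```python
import random

def slot_stats(n_tags, n_slots, seed=1):
    """Simulate one framed-ALOHA frame: every tag picks a slot uniformly
    at random; return (empty, singleton, collision) slot counts."""
    rng = random.Random(seed)
    counts = [0] * n_slots
    for _ in range(n_tags):
        counts[rng.randrange(n_slots)] += 1
    empty = counts.count(0)
    single = counts.count(1)
    return empty, single, n_slots - empty - single

def lower_bound_estimate(single, collision):
    """Each singleton slot holds one tag, each collision slot at least
    two, so this never overshoots the true tag count."""
    return single + 2 * collision
```

A finer estimator then refines this rough figure, e.g. by searching near it with a MAP criterion as the paper does.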
Text keyword extraction method based on word frequency statistics
LUO Yan, ZHAO Shuliang, LI Xiaochao, HAN Yuhui, DING Yafei
Journal of Computer Applications    2016, 36 (3): 718-725.   DOI: 10.11772/j.issn.1001-9081.2016.03.718
Focusing on the low efficiency and poor accuracy of the traditional TF-IDF (Term Frequency-Inverse Document Frequency) algorithm in keyword extraction, a text keyword extraction method based on word frequency statistics was proposed. Firstly, a formula for the number of same-frequency words in a text was derived from Zipf's law; secondly, the proportion of words of each frequency in the text was determined according to this formula, most of them being low-frequency words; finally, a TF-IDF algorithm based on word frequency statistics was obtained by applying this word frequency law to keyword extraction. Simulation experiments were conducted on Chinese and English text datasets. The average relative error of the same-frequency word formula was no more than 0.05, and the maximum absolute error of the proportion of words of each frequency was 0.04. Compared with the traditional TF-IDF algorithm, the TF-IDF algorithm based on word frequency statistics increased the average precision, average recall and average F1-measure, while decreasing the average runtime. The simulation results show that in text keyword extraction the proposed algorithm is superior to the traditional TF-IDF algorithm in precision, recall and F1-measure, and effectively reduces the runtime in keyword extraction.
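A commonly cited same-frequency word formula derived from Zipf's law says the proportion of distinct words occurring exactly n times is about 1/(n(n+1)), so roughly half of all distinct words occur only once. The paper's exact formula may differ; this sketch uses the classical form.

```python
def same_freq_proportion(n):
    """Approximate fraction of distinct words that occur exactly n
    times in a text obeying Zipf's law: 1 / (n * (n + 1))."""
    return 1.0 / (n * (n + 1))
```

The proportions telescope toward 1 as n grows, consistent with partitioning the vocabulary by frequency: sum over n of 1/(n(n+1)) equals 1 - 1/(N+1).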
Scene classification based detail-preserving histogram equalization
HU Jing, MA Xiaofeng, SHENG Weixing, HAN Yubing
Journal of Computer Applications    2014, 34 (7): 2001-2004.   DOI: 10.11772/j.issn.1001-9081.2014.07.2001

Due to the gray-level swallowing and over-enhancement problems of traditional histogram equalization, an improved histogram equalization algorithm combining scene classification and detail preservation was proposed. In this algorithm, images were classified according to their histogram features, and the parameters of piecewise histogram equalization were optimized according to the scene class and the characteristics of the image histogram. The complexity of the improved algorithm is only O(L), where L is the number of gray levels (256 here). The improved algorithm requires little computation and solves the swallowing and over-enhancement problems of traditional histogram equalization. Results on a TI (Texas Instruments) DM648 platform show that the algorithm can be used for real-time video image enhancement.
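For reference, plain global histogram equalization, the O(L) baseline the improved algorithm refines, can be written as below; the scene-dependent piecewise variant itself is not reproduced here.

```python
def equalize(gray, levels=256):
    """Global histogram equalization over a flat list of gray values:
    map each level through the normalized cumulative histogram."""
    hist = [0] * levels
    for g in gray:
        hist[g] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(gray)
    return [round((levels - 1) * cdf[g] / n) for g in gray]
```

Piecewise variants apply this mapping separately on histogram segments with clipped or reweighted bins, which is what limits the swallowing and over-enhancement of the plain version.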

Generalized incremental manifold learning algorithm based on local smoothness
ZHOU Xue-yan, HAN Jian-min, ZHAN Yu-bin
Journal of Computer Applications    2012, 32 (06): 1670-1673.   DOI: 10.3724/SP.J.1087.2012.01670
Most existing manifold learning algorithms are not capable of dealing with newly arrived samples. Although some incremental algorithms have been developed by extending specific manifold learning algorithms, most of them have various disadvantages. In this paper, a novel and more Generalized Incremental Manifold Learning (GIML) algorithm based on local smoothness is proposed. GIML first extracts the local smoothness structure of the data set via local PCA. Then the optimal linear transformation, which maps the local smoothness structure of a new sample's neighborhood to its corresponding low-dimensional embedding coordinates, is computed. Finally, the low-dimensional embedding coordinates of new samples are obtained by this optimal transformation. Extensive and systematic experiments were conducted on both artificial and real image data sets. The experimental results demonstrate that GIML is an effective incremental manifold learning algorithm and outperforms existing algorithms.
GPU-based parallel implementation of FDK algorithm for cone-beam CT
HAN Yu, YAN Bin, YU Chao-qun, LI Lei, LI Jian-xin
Journal of Computer Applications    2012, 32 (05): 1407-1410.  
To improve the reconstruction speed of the FDK algorithm, this paper presents a fast algorithm based on the Graphics Processing Unit (GPU). The method achieves higher computational efficiency through careful optimization, including a reasonable thread assignment scheme, collecting and pre-computing the variables that are independent of the voxel, and reducing the number of global memory accesses. The simulation results show that, with no loss of precision, the fully optimized algorithm reconstructs a 256³ volume in only 0.5 seconds and a 512³ volume in only 2.5 seconds, a big advance in comparison with the latest research findings.
Weighted improvement method based on local gray-level model of active shape model
HAN Yu-feng, WANG Xiao-lin
Journal of Computer Applications    2011, 31 (12): 3392-3394.  
When the Active Shape Model (ASM) algorithm searches for a target point, only the local gray-level information around the current point is used. As a result, two points that have similar gray levels but very different texture details are often confused, so the positioning accuracy is not guaranteed. This paper assumes that the probability of a candidate point on either side along the normal direction being the real feature point decreases with its distance from the current point, and therefore introduces a Gaussian distribution to model this probability. In this way, better candidate points can be found in the target image and the positioning accuracy is improved.
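The Gaussian weighting of candidate points along the profile normal can be sketched as below; this is illustrative, with `sigma` a free parameter that the abstract does not specify.

```python
import math

def candidate_weight(offset, sigma=1.0):
    """Gaussian prior on a candidate at a signed offset along the
    normal: the probability of being the true feature point decays
    symmetrically with distance from the current point."""
    return math.exp(-offset * offset / (2 * sigma * sigma))
```

Multiplying the gray-level matching score of each candidate by this weight biases the search toward nearby candidates without forbidding distant ones.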
Nonlinear discriminant K-means clustering on manifold
GAO Li-pin, ZHOU Xue-yan, ZHAN Yu-bin
Journal of Computer Applications    2011, 31 (12): 3247-3251.  
In real applications in pattern recognition and computer vision, high-dimensional data often lie approximately on a low-dimensional manifold. How to improve the performance of clustering algorithms on high-dimensional data by exploiting the manifold structure is a research hotspot in the machine learning and data mining communities. In this paper, a novel clustering algorithm called Nonlinear Discriminant K-means Clustering (NDisKmeans), which takes the manifold structure of high-dimensional data into account, is proposed. By introducing spectral regularization, NDisKmeans first represents the desired low-dimensional coordinates as linear combinations of smooth vectors predefined on the data manifold, and then maximizes the ratio between the inter-cluster scatter and the total scatter to cluster the high-dimensional data. A convergent iterative procedure is devised to solve for the combination coefficient matrix and the clustering assignment matrix. NDisKmeans overcomes the limitation of the linear mapping in the DisKmeans algorithm and therefore significantly improves clustering performance. Systematic and extensive experiments on UCI and real-world data sets show the effectiveness of the proposed NDisKmeans method.
New trend and challenges in 3D video coding
DENG Zhi-pin, JIA Ke-bin, CHAN Yui-lam, FU Chang-hong, SIU Wan-chi
Journal of Computer Applications    2011, 31 (09): 2453-2456.   DOI: 10.3724/SP.J.1087.2011.02453
The key technologies of 3D video coding were introduced. Firstly, the development directions and challenges of video-only-format and depth-enhanced-format 3D video were elaborated, and the depth estimation and view synthesis technologies were analyzed in detail. Subsequently, the ongoing standardization of the 3DV/FTV standard in MPEG was summarized. Conclusions and prospects were given at the end.